Key-Value Combination for Scholarship Disbursal Analysis through Map-Reduce
Authors
Abstract
Similar resources
Enhancing KiWi - Scalable Concurrent Key-Value Map
We take a relatively new wait-free, concurrent sorted map called KiWi, and fix and enhance it. First, we test its linearizability by fuzzing and by applying the Wing & Gong [2] linearizability test. After fixing a few bugs in the algorithm design and its implementation, we enhance it. We design, implement and test tw...
Promoting Scholarship through Design
How can new media positively transform scholarly practices? One possible way is for scholarly archives such as e-journals and digital libraries to better support the needs and practices of their users, instead of the publishing process. This paper examines what it might mean to promote cognitive and social aspects of ‘scholarship’ through innovative archive design. Design requirements for suppo...
Cloud Hadoop Map Reduce For Remote Sensing Image Analysis
Image processing algorithms related to remote sensing have been tested and utilized on the Hadoop MapReduce parallel platform by using an experimental 112-core high-performance cloud computing system that is situated in the Environmental Studies Center at the University of Qatar. Although there has been considerable research utilizing the Hadoop platform for image processing rather than for its...
Classification Algorithms for Big Data Analysis, a Map Reduce Approach
For many years, the scientific community has been concerned with how to increase the accuracy of different classification methods, and major achievements have been made so far. Besides this issue, the increasing amount of data that is being generated every day by remote sensors raises more challenges to be overcome. In this work, a tool within the scope of InterIMAGE Cloud Platform (ICP), whic...
Analysing Distributed Big Data through Hadoop Map Reduce
This term paper focuses on how big data is analysed in a distributed environment through Hadoop Map Reduce. Big Data is the same as "small data" but bigger in size; thus, it is approached in different ways. Storage of Big Data requires analysing the characteristics of the data. It can be processed by the employment of Hadoop Map Reduce. Map Reduce is a programming model working in parallel for large c...
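The abstracts above all rely on the same MapReduce pattern: a map phase emits key-value pairs, a shuffle step groups values by key, and a reduce phase combines each group. A minimal single-process sketch of that pattern, applied to the paper's theme of per-student scholarship disbursal totals (the record fields `student_id` and `amount` are hypothetical illustrations, not taken from the paper):

```python
from collections import defaultdict

def map_phase(records):
    """Map step: emit one (student_id, amount) key-value pair per record."""
    for record in records:
        yield record["student_id"], record["amount"]

def shuffle(pairs):
    """Shuffle step: group all emitted values by their key."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    """Reduce step: combine each key's values; here, total disbursal per student."""
    return {key: sum(values) for key, values in grouped.items()}

# Hypothetical sample records for illustration.
records = [
    {"student_id": "S1", "amount": 500},
    {"student_id": "S2", "amount": 300},
    {"student_id": "S1", "amount": 200},
]
totals = reduce_phase(shuffle(map_phase(records)))
print(totals)  # {'S1': 700, 'S2': 300}
```

In a real Hadoop deployment the shuffle is performed by the framework across machines, and the map and reduce functions are distributed over the cluster; the sketch only shows the key-value combination logic itself.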
Journal
Journal title: Journal of Physics: Conference Series
سال: 2021
ISSN: 1742-6588,1742-6596
DOI: 10.1088/1742-6596/1714/1/012002